Pruning-aware Sparse Regularization for Network Pruning
Authors
Abstract
Structural neural network pruning aims to remove the redundant channels in deep convolutional neural networks (CNNs) by pruning the filters of less importance to the final output accuracy. To reduce the degradation of performance after pruning, many methods utilize the loss with sparse regularization to produce structured sparsity. In this paper, we analyze these sparsity-training-based methods and find that the regularization of unpruned channels is unnecessary. Moreover, it restricts the network's capacity, which leads to under-fitting. To solve this problem, we propose a novel pruning method, named MaskSparsity, with pruning-aware sparse regularization. MaskSparsity imposes the fine-grained sparse regularization on the specific filters selected by a pruning mask, rather than on all the filters of the model. Before the fine-grained sparse regularization of MaskSparsity, we can use many methods to get the pruning mask, such as running the global sparse regularization. MaskSparsity achieves a 63.03% reduction in floating-point operations (FLOPs) on ResNet-110 by removing 60.34% of the parameters, with no top-1 accuracy loss on CIFAR-10. On ILSVRC-2012, MaskSparsity reduces more than 51.07% of the FLOPs of ResNet-50, with only a 0.76% loss in top-1 accuracy. The code of this paper is released at https://github.com/CASIA-IVA-Lab/MaskSparsity . We have also integrated the code into a self-developed PyTorch pruning toolkit, EasyPruner, at https://gitee.com/casia_iva_engineer/easypruner
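The mechanism the abstract describes can be summarized in a few lines of PyTorch. The sketch below is a minimal illustration under two assumptions: channel saliency is measured by the batch-norm scale factor gamma (as in Network-Slimming-style channel pruning), and the pruning mask comes from a global threshold on |gamma|. The helper names build_prune_masks and mask_sparsity_penalty are ours, not identifiers from the released code, so treat this as a sketch of the idea rather than the authors' implementation.

    import torch
    import torch.nn as nn

    def build_prune_masks(model, prune_ratio=0.6):
        # Hypothetical helper: globally rank |gamma| over all BN layers and
        # mark the smallest `prune_ratio` fraction of channels for pruning.
        gammas = torch.cat([m.weight.detach().abs().flatten()
                            for m in model.modules()
                            if isinstance(m, nn.BatchNorm2d)])
        threshold = torch.quantile(gammas, prune_ratio)
        return {name: m.weight.detach().abs() <= threshold
                for name, m in model.named_modules()
                if isinstance(m, nn.BatchNorm2d)}

    def mask_sparsity_penalty(model, prune_masks, lam=1e-4):
        # L1 penalty applied only to the BN scale factors of channels that the
        # mask marks for pruning; unpruned channels are left unregularized.
        penalty = 0.0
        for name, module in model.named_modules():
            if isinstance(module, nn.BatchNorm2d) and name in prune_masks:
                penalty = penalty + module.weight[prune_masks[name]].abs().sum()
        return lam * penalty

    # In a training step, the penalty is simply added to the task loss:
    #   loss = criterion(model(images), labels) + mask_sparsity_penalty(model, masks)
    #   loss.backward()

The design point the paper argues for is visible in mask_sparsity_penalty: the L1 term touches only the channels already slated for removal, so the surviving channels keep their full capacity instead of being shrunk by a global sparsity penalty.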
Similar resources
Pruning from Adaptive Regularization
Inspired by the recent upsurge of interest in Bayesian methods, we consider adaptive regularization. A generalization-based scheme for adapting regularization parameters is introduced and compared to Bayesian regularization. We show that pruning arises naturally within both adaptive regularization schemes. As a model example, we have chosen the simplest possible: estimating the mean of a rando...
Full text
Neural Network Pruning and Pruning Parameters
The default multilayer neural network topology is a fully interlayer-connected one. This simplistic choice facilitates the design but limits the performance of the resulting networks. The best-known methods for obtaining partially connected neural networks are the so-called pruning methods, which are used for optimizing both the size and the generalization capabilities of neural networ...
Full text
Regularization with a Pruning Prior
We investigate the use of a regularization prior and its pruning properties. We illustrate the behavior of this prior by conducting analyses, both within a Bayesian framework and with the generalization method, on a simple toy problem. The results are thoroughly compared with those obtained with traditional weight decay. Copyright 1997 Elsevier Science Ltd.
Full text
Learning Sparse Structured Ensembles with SG-MCMC and Network Pruning
An ensemble of neural networks is known to be more robust and accurate than an individual network, but usually at a linearly increased cost in both training and testing. In this work, we propose a two-stage method to learn Sparse Structured Ensembles (SSEs) for neural networks. In the first stage, we run SG-MCMC with group sparse priors to draw an ensemble of samples from the posterior dist...
Full text
Greedy Sparse Signal Recovery with Tree Pruning
Recently, greedy algorithms have received much attention as a cost-effective means of reconstructing sparse signals from compressed measurements. Much previous work has focused on investigating a single candidate to identify the support (the index set of nonzero elements) of the sparse signal. A well-known drawback of the greedy approach is that the chosen candidate is often not the optimal ...
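For context on the single-candidate greedy selection this snippet contrasts with tree search, here is a minimal NumPy sketch of orthogonal matching pursuit (OMP), a standard baseline of this kind; it is an illustration of generic greedy support identification, not the paper's tree-pruning algorithm.

    import numpy as np

    def omp(A, y, k):
        # Orthogonal matching pursuit: at each step, greedily pick the single
        # column most correlated with the residual, then refit by least squares.
        # A: (m, n) measurement matrix, y: (m,) measurements, k: sparsity level.
        residual = y.astype(float).copy()
        support = []
        coef = np.zeros(0)
        for _ in range(k):
            scores = np.abs(A.T @ residual)
            scores[support] = -np.inf        # never re-select a chosen column
            support.append(int(np.argmax(scores)))
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x = np.zeros(A.shape[1])
        x[support] = coef
        return x

The drawback the snippet names is visible here: each iteration commits irrevocably to the single highest-scoring column, so an early suboptimal choice cannot be undone, which is what tree-search variants aim to mitigate.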
Full text
Journal
Journal title: Machine Intelligence Research
Year: 2023
ISSN: 2731-538X, 2731-5398
DOI: https://doi.org/10.1007/s11633-022-1353-0